Aligning RecSys

My thesis project for my CS degree. Here's what GPT-3 has to say about it:

Recommendation systems (RS) are used to help people find things on the internet. Sometimes they recommend things you might like, and sometimes they recommend things that are good for them.
But most of the time, the people who make these recommendation systems want to make money, so they try to get you to buy or do things that will make them money. They want to make you happy, but they also want to make money. So sometimes they make recommendations that are good for you, and sometimes they make recommendations that are bad for you.

Recommendation systems (RS) are misaligned with their users: they optimize for metrics like retention or ad clicks. I want to use social dilemmas in networks of agents mediated by a self-interested RS to understand the impact of RS on the self-organization of such networks.
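To make the idea concrete, here is a minimal sketch of the kind of simulation I have in mind: agents on a network play a repeated prisoner's dilemma, and an engagement-greedy recommender (standing in for a self-interested RS) decides who interacts with whom. Everything here is illustrative — the payoff matrix, the `engagement_greedy_recommender` heuristic, and all parameter values are assumptions for exposition, not the actual thesis code.

```python
import random

# Illustrative prisoner's-dilemma payoffs (placeholders, not the thesis values).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

class Agent:
    """An agent with a fixed cooperation propensity and cumulative welfare."""
    def __init__(self, coop_prob):
        self.coop_prob = coop_prob   # probability of cooperating
        self.welfare = 0.0           # total payoff received so far

    def act(self):
        return "C" if random.random() < self.coop_prob else "D"

def engagement_greedy_recommender(agents, history, eps=0.1):
    """Self-interested RS: mostly re-pair the agents that generated the most
    total payoff ('engagement') in the past, with a little random exploration.
    It optimizes its own metric, not the agents' welfare."""
    if not history or random.random() < eps:
        return random.sample(range(len(agents)), 2)
    return list(max(history, key=history.get))

def run_simulation(n_agents=20, rounds=1000, seed=0):
    random.seed(seed)
    agents = [Agent(coop_prob=random.random()) for _ in range(n_agents)]
    history = {}      # (i, j) -> cumulative joint payoff ("engagement")
    coop_count = 0
    for _ in range(rounds):
        i, j = engagement_greedy_recommender(agents, history)
        a_i, a_j = agents[i].act(), agents[j].act()
        p_i, p_j = PAYOFF[(a_i, a_j)]
        agents[i].welfare += p_i
        agents[j].welfare += p_j
        history[(i, j)] = history.get((i, j), 0) + p_i + p_j
        coop_count += (a_i == "C") + (a_j == "C")
    coop_rate = coop_count / (2 * rounds)
    mean_welfare = sum(a.welfare for a in agents) / n_agents
    return coop_rate, mean_welfare

if __name__ == "__main__":
    coop_rate, mean_welfare = run_simulation()
    print(f"cooperation rate: {coop_rate:.2f}, mean welfare: {mean_welfare:.1f}")
```

Comparing outcomes like the cooperation rate and mean welfare under different recommender policies (engagement-greedy vs. random vs. welfare-aware) is the sort of experiment the simulations are meant to run at scale.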

I turned in my proposal in July 2020, so the next step is running many simulations.